The ridge linear regression algorithm

The ridge linear regression algorithm (RLR) is rooted in the Tikhonov regularisation [Tikhonov, 1963] and has been exercised for decades. It is still researched intensively nowadays for improving performance and applicability in many areas [Hoerl, 1962; Tikhonov, 1963; Dorugade and Kashid, 2010; Roozbeh, et al., 2020]. The reason that it remains a popular research subject is its incomparable feature in model estimation.

The introduction of the Tikhonov regularisation is due to the ill-posed problem which often affects the pseudo-inverse term $(\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}$ in a linear regression model [Hoerl, 1962; Tikhonov, 1963]. The problem arises because $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ can be singular, in which case its inverse cannot be calculated. In the Tikhonov regularisation, a positive constant $\lambda$ is introduced to form $\mathbf{X}^{\mathrm{T}}\mathbf{X} + \lambda\mathbf{I}$, a matrix which is invertible at all times. The ordinary solution $\hat{\mathbf{w}} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}$ is therefore replaced by the following format as the solution to a RLR model,

\hat{\mathbf{w}} = (\mathbf{X}^{\mathrm{T}}\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y} \qquad (4.35)
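As a brief illustrative sketch (not the author's implementation), the closed-form solution in Equation (4.35) can be computed directly with NumPy. The synthetic data, the deliberately collinear second column that makes $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ singular, and the value of $\lambda$ are all assumptions chosen for the example.

```python
import numpy as np

# Assumed synthetic data: two perfectly collinear columns make X^T X singular,
# so the ordinary least-squares inverse (X^T X)^{-1} does not exist.
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, 2.0 * x1])        # second column = 2 * first column
y = 3.0 * x1 + rng.normal(scale=0.1, size=50)

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))          # prints 1 (< 2), i.e. singular

# Ridge solution of Equation (4.35): (X^T X + lambda*I)^{-1} X^T y
lam = 0.1                                   # assumed regularisation constant
w_hat = np.linalg.solve(XtX + lam * np.eye(X.shape[1]), X.T @ y)
print(w_hat)
```

Solving the linear system with np.linalg.solve, rather than forming the inverse explicitly, is the usual numerically safer choice.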

Equation (4.35) also satisfies the Lagrange multiplier formulation [Kalman, …], where $c$ is a positive constant,

\min_{\mathbf{w}} \; (\mathbf{y}-\mathbf{X}\mathbf{w})^{\mathrm{T}}(\mathbf{y}-\mathbf{X}\mathbf{w}) + \lambda(\mathbf{w}^{\mathrm{T}}\mathbf{w} - c) \qquad (4.36)
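Setting the gradient of the Lagrangian in Equation (4.36) with respect to $\mathbf{w}$ to zero (a standard derivation) recovers Equation (4.35),

\frac{\partial}{\partial \mathbf{w}}\Big[(\mathbf{y}-\mathbf{X}\mathbf{w})^{\mathrm{T}}(\mathbf{y}-\mathbf{X}\mathbf{w}) + \lambda(\mathbf{w}^{\mathrm{T}}\mathbf{w}-c)\Big] = -2\mathbf{X}^{\mathrm{T}}(\mathbf{y}-\mathbf{X}\mathbf{w}) + 2\lambda\mathbf{w} = \mathbf{0},

so that $(\mathbf{X}^{\mathrm{T}}\mathbf{X} + \lambda\mathbf{I})\mathbf{w} = \mathbf{X}^{\mathrm{T}}\mathbf{y}$, which gives exactly the solution in Equation (4.35).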

From another viewpoint, RLR is also connected with BLR. Applying the negative logarithm to Equation (4.32) used in BLR leads to the following, where $C$ is a constant,

-\log p(\mathbf{w}\mid\mathbf{y},\mathbf{X}) \propto \frac{\beta}{2}\|\mathbf{X}\mathbf{w}-\mathbf{y}\|^{2} + \frac{\alpha}{2}\mathbf{w}^{\mathrm{T}}\mathbf{w} - \frac{N}{2}\log\beta - \frac{d}{2}\log\alpha + C \qquad (4.37)

Rewriting the above equation results in RLR, in which $\lambda \sim \alpha/\beta$.
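The correspondence can be made explicit with a standard rearrangement (assuming, as usual, that the $\log\beta$ and $\log\alpha$ terms do not depend on $\mathbf{w}$ and can be dropped): minimising Equation (4.37) over $\mathbf{w}$ and dividing the remaining terms by $\beta/2$ gives

\arg\min_{\mathbf{w}} \left[\frac{\beta}{2}\|\mathbf{X}\mathbf{w}-\mathbf{y}\|^{2} + \frac{\alpha}{2}\mathbf{w}^{\mathrm{T}}\mathbf{w}\right] = \arg\min_{\mathbf{w}} \left[\|\mathbf{X}\mathbf{w}-\mathbf{y}\|^{2} + \frac{\alpha}{\beta}\,\mathbf{w}^{\mathrm{T}}\mathbf{w}\right],

which is the unconstrained RLR objective with $\lambda = \alpha/\beta$.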